With the growth of high-dimensional sparse data in web-scale recommender systems, the computational cost of learning high-order feature interactions in the CTR prediction task increases substantially, which limits the use of high-order interaction models in real industrial applications. Some recent knowledge-distillation-based methods transfer knowledge from complex teacher models to shallow student models to accelerate online inference, but they suffer from accuracy degradation during the distillation process; balancing the efficiency and effectiveness of the shallow student models is challenging. To address this problem, we propose a Directed Acyclic Graph Factorization Machine (KD-DAGFM) that learns high-order feature interactions from existing complex interaction models for CTR prediction via knowledge distillation. The proposed lightweight student model, DAGFM, can learn arbitrary explicit feature interactions from teacher networks and achieves approximately lossless performance, as proved by a dynamic programming algorithm. Furthermore, an improved general model, KD-DAGFM+, is shown to be effective in distilling both explicit and implicit feature interactions from any complex teacher model. Extensive experiments are conducted on four real-world datasets, including a large-scale industrial dataset from the WeChat platform with billions of feature dimensions. KD-DAGFM achieves the best performance with less than 21.5% of the FLOPs of the state-of-the-art method in both online and offline experiments, showing the superiority of DAGFM in dealing with industrial-scale data in the CTR prediction task. Our implementation code is available at: https://github.com/RUCAIBox/DAGFM.
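The knowledge-distillation setup described above trains a lightweight student to reproduce a complex teacher's outputs. A minimal sketch of a generic logit-matching distillation objective (illustrative only; the function name and the mean-squared-error form are assumptions, not the paper's exact loss):

```python
import numpy as np

def distillation_loss(student_logits, teacher_logits):
    """Mean squared error between student and teacher predictions.

    A generic distillation objective: the lightweight student is trained
    to reproduce the teacher's outputs, here on raw CTR logits.
    """
    s = np.asarray(student_logits, dtype=float)
    t = np.asarray(teacher_logits, dtype=float)
    return float(np.mean((s - t) ** 2))
```

In practice this term is usually combined with the ordinary supervised CTR loss on ground-truth labels.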
This study proposes a deep-learning-based tracking method for ultrasound (US) image-guided radiation therapy. The proposed cascade deep learning model consists of an attention network, a mask region-based convolutional neural network (Mask R-CNN), and a long short-term memory (LSTM) network. The attention network learns a mapping from US images to a suspected region of landmark motion in order to reduce the search region. The Mask R-CNN then produces multiple region-of-interest (ROI) proposals in the reduced region and identifies the proposed landmark via three network heads: bounding-box regression, proposal classification, and landmark segmentation. The LSTM network models the temporal relationship between consecutive image frames for bounding-box regression and proposal classification. To consolidate the final proposal, a selection method is designed based on the similarity between sequential frames. The proposed method was tested on the liver US tracking dataset of the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2015 challenge, in which landmarks were annotated by three experienced observers to obtain their mean positions. On the 24 given sequences with ground truth, the mean tracking error over all landmarks was 0.65 +/- 0.56 mm, and the errors of all landmarks were within 2 mm. We further tested the proposed model on 69 landmarks from the test dataset, which has an image pattern similar to the training set, resulting in a mean tracking error of 0.94 +/- 0.83 mm. Our experimental results demonstrate the feasibility and accuracy of the proposed method for tracking liver anatomical landmarks in US images, providing a potential solution for active motion management during radiotherapy.
Generating robust and reliable correspondences across images is a fundamental task for a wide range of applications. To capture context at both global and local granularity, we propose ASpanFormer, a Transformer-based detector-free matcher built on a hierarchical attention structure, adopting a novel attention operation that is capable of adjusting the attention span in an adaptive manner. To achieve this goal, first, flow maps are regressed in each cross-attention phase to locate the center of the search region. Next, a sampling grid is generated around the center, whose size is not fixed according to an empirical configuration but computed adaptively from pixel uncertainty estimated along with the flow map. Finally, attention is computed across the two images within the derived regions, referred to as attention spans. In this way, we are able not only to maintain long-range dependencies, but also to obtain fine-grained attention among pixels of high relevance, compensating for essential locality and piece-wise smoothness in the matching task. State-of-the-art accuracy on a wide range of evaluation benchmarks validates the strong matching capability of our method.
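The adaptive-span step above sizes a sampling grid around a predicted center according to estimated pixel uncertainty. A minimal sketch of that idea (the function name, the scaling factor, and the clamping bounds are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def adaptive_span(center, uncertainty, min_size=5, max_size=33):
    """Build an adaptive attention-span grid around a predicted center.

    Illustrative sketch: the (odd) grid size grows with the pixel
    uncertainty estimated alongside the flow map, instead of using a
    fixed empirical window.
    """
    size = int(np.clip(round(4 * uncertainty) | 1, min_size, max_size))  # odd size
    half = size // 2
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
    return xs + center[0], ys + center[1]
```

High-uncertainty pixels thus attend over a wider region, while confident pixels get a tight, fine-grained span.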
Domain adaptive text classification is a challenging problem for large-scale pretrained language models because they often require expensive additional labeled data to adapt to new domains. Existing works typically fail to leverage the implicit relationships among words across domains. In this paper, we propose a novel method, called Domain Adaptation with Structured Knowledge (DASK), to enhance domain adaptation by exploiting word-level semantic relationships. DASK first builds a knowledge graph to capture the relationships between pivot terms (domain-independent words) and non-pivot terms in the target domain. Then, during training, DASK injects pivot-related knowledge graph information into source-domain texts. For the downstream task, these knowledge-injected texts are fed into a BERT variant capable of processing knowledge-injected textual data. Thanks to the knowledge injection, our model learns domain-invariant features for non-pivots based on their relationships with pivots. DASK ensures that pivots have domain-invariant behavior by dynamically inferring them via the polarity scores of candidate pivots during training with pseudo-labels. We validate DASK on a wide range of cross-domain sentiment classification tasks and observe an absolute performance improvement of up to 2.9% over baselines on 20 different domain pairs. Code will be made available at https://github.com/hikaru-nara/DASK.
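Pivot selection above relies on scoring how consistently a candidate word indicates one polarity. A toy proxy for such a polarity score (the frequency-difference criterion here is an assumption for illustration; DASK's actual scoring is defined in the paper):

```python
from collections import Counter

def polarity_scores(docs_pos, docs_neg):
    """Score candidate pivot words by polarity consistency.

    Simple proxy: |count(w | pos) - count(w | neg)| normalized by the
    word's total frequency, so words appearing almost exclusively with
    one label score near 1 and balanced words score near 0.
    """
    pos, neg = Counter(), Counter()
    for d in docs_pos:
        pos.update(d.split())
    for d in docs_neg:
        neg.update(d.split())
    vocab = set(pos) | set(neg)
    return {w: abs(pos[w] - neg[w]) / (pos[w] + neg[w]) for w in vocab}
```

Words with high scores in both source and target domains are natural pivot candidates.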
Single image deraining (SID) in real scenarios has attracted increasing attention in recent years. Due to the difficulty of obtaining real-world rainy/clean image pairs, previous real datasets suffer from low-resolution images, homogeneous rain streaks, limited background variation, and even misalignment of image pairs, making the evaluation of SID methods unreliable. To address these issues, we establish a new high-quality dataset named RealRain-1K, consisting of 1,120 high-resolution paired clean and rainy images with low-density and high-density rain streaks, respectively. Images in RealRain-1K are automatically generated, with good alignment, from a large number of real-world rain video clips through a simple yet effective rain-density-controllable filtering method. RealRain-1K also provides abundant rain streak layers as a byproduct, enabling us to build a large-scale synthetic dataset named SynRain-13K by pasting the rain streak layers onto abundant natural images. Based on these and existing datasets, we benchmark 10 representative SID methods on three tracks: (1) fully supervised learning on RealRain-1K, (2) domain generalization to real datasets, and (3) syn-to-real transfer learning. The experimental results (1) show the differences among representative methods in image restoration performance and model complexity, (2) validate the significance of the proposed datasets for model generalization, and (3) provide useful insights into the superiority of learning from diverse domains, shedding light on future research on real-world SID. The dataset will be released at https://github.com/hiker-lw/RealRain-1k.
Artificial intelligence (AI) provides a promising alternative for streamlining COVID-19 diagnosis. However, concerns surrounding security and trustworthiness impede the collection of large-scale representative medical data, posing a considerable challenge for training well-generalized models for clinical practice. To address this, we launched the Unified CT-COVID AI Diagnostic Initiative (UCADI), in which the AI model can be distributed to and independently executed at each host institution under a federated learning (FL) framework without data sharing. Here we show that our FL model outperformed all the local models by a large margin (test sensitivity/specificity in China: 0.973/0.951; in the UK: 0.730/0.942), achieving performance comparable to a panel of professional radiologists. We further evaluated the model on held-out data (collected from two additional hospitals left out of FL) and heterogeneous data (acquired with contrast materials), provided visual explanations for the decisions made by the model, and analyzed the trade-offs between model performance and communication costs in the federated training process. Our study is based on 9,573 chest computed tomography scans (CTs) from 3,336 patients collected from 23 hospitals located in China and the UK. Collectively, our work advances the prospects of utilizing federated learning for privacy-preserving AI in digital health.
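The federated setup described above aggregates locally trained models instead of sharing patient data. A minimal sketch of the standard FedAvg aggregation step (a generic illustration of federated learning, not the UCADI codebase):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate client model parameters without sharing raw data.

    Standard FedAvg: a dataset-size-weighted average of each
    institution's locally trained parameter vector.
    """
    sizes = np.asarray(client_sizes, dtype=float)
    coefs = sizes / sizes.sum()
    return sum(c * np.asarray(w, dtype=float)
               for c, w in zip(coefs, client_weights))
```

Each communication round, every hospital trains locally, uploads only its parameters, and receives the averaged model back.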
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a family of mesh-independent neural-network-based PDE solvers, suggest a promising path toward overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
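The Koopman view underlying KNO is that nonlinear dynamics can be approximated by a linear operator acting on (learned) observables. A minimal NumPy sketch of fitting such a linear operator by least squares (a classical DMD-style approximation for intuition, not the KoopmanLab API; neural operators additionally learn the observable space):

```python
import numpy as np

def fit_koopman_operator(snapshots):
    """Fit a linear operator K with x_{t+1} ≈ K x_t by least squares.

    snapshots: array of shape (T, d) of observed states over time.
    K minimizes ||Y - K X||_F for X = [x_0..x_{T-2}], Y = [x_1..x_{T-1}].
    """
    X, Y = snapshots[:-1].T, snapshots[1:].T  # each d x (T-1)
    return Y @ np.linalg.pinv(X)
```

Once K is fit, long-term prediction reduces to repeated matrix multiplication, which is where the efficiency of Koopman-based solvers comes from.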
Knowledge graph embedding (KGE), which maps entities and relations in a knowledge graph into continuous vector spaces, has achieved great success in predicting missing links in knowledge graphs. However, knowledge graphs often contain incomplete triples that are difficult to inductively infer by KGEs. To address this challenge, we resort to analogical inference and propose a novel and general self-supervised framework AnKGE to enhance KGE models with analogical inference capability. We propose an analogical object retriever that retrieves appropriate analogical objects at the entity, relation, and triple levels. In AnKGE, we train an analogy function for each level of analogical inference that takes the original element embedding from a well-trained KGE model as input and outputs the analogical object embedding. In order to combine the inductive inference capability of the original KGE model with the analogical inference capability enhanced by AnKGE, we interpolate the analogy score with the base model score and introduce adaptive weights in the score function for prediction. Through extensive experiments on the FB15k-237 and WN18RR datasets, we show that AnKGE achieves competitive results on the link prediction task and performs analogical inference well.
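The prediction step above interpolates the analogy score with the base KGE score. A one-line sketch of that interpolation (the function name and the simple convex-combination form are illustrative; the paper's adaptive weights are learned, not fixed):

```python
def interpolated_score(base_score, analogy_score, weight):
    """Convex combination of the base KGE score and the analogy score.

    `weight` in [0, 1] plays the role of the adaptive weight in the
    prediction score function.
    """
    assert 0.0 <= weight <= 1.0
    return weight * analogy_score + (1.0 - weight) * base_score
```

Setting the weight to 0 recovers the original KGE model, so the enhancement can only be applied where analogical evidence exists.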
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (i.e., phone unlocking) while lacking consideration of long-distance scenes (i.e., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are new challenges for surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover discriminative image information by incorporating a super-resolution network. (2) Generated sample pairs are used to simulate quality variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. Through this study, we obtain two insights: 1) The input representation plays a crucial role in robustness; under specific corruptions, different representations behave quite differently. 2) Although state-of-the-art methods for LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) which greatly boosts robustness with simple but effective modifications. We hope that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
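Of the corruption groups above, measurement noise is the simplest to simulate. A toy sketch of one such corruption, Gaussian jitter on point coordinates (illustrative only; SemanticKITTI-C defines 16 specific corruptions with their own parameters):

```python
import numpy as np

def jitter_points(points, sigma=0.02, seed=0):
    """Simulate measurement noise by adding Gaussian jitter to LiDAR points.

    points: (N, 3) array of xyz coordinates; sigma is the noise scale
    in the same units as the coordinates (e.g., meters).
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    return pts + rng.normal(0.0, sigma, size=pts.shape)
```

Robustness benchmarks typically sweep several severity levels of such corruptions and report the degradation relative to clean data.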